We propose a novel teacher-student model for semi-supervised multi-organ segmentation. In the teacher-student paradigm, data augmentation is usually applied to unlabeled data to regularize consistent training between teacher and student. We start from a key observation: the fixed relative locations and variable sizes of different organs provide information about the distribution from which a multi-organ CT scan is drawn. We therefore treat this anatomical prior as a strong tool to guide data augmentation and reduce the mismatch between labeled and unlabeled images in semi-supervised learning. More specifically, we propose a data augmentation strategy based on a partition-and-recovery scheme over $N^3$ cubes, applied both across and within labeled and unlabeled images. Our strategy encourages unlabeled images to learn organ semantics at their relative locations from the labeled images (cross-branch) and enhances the learning of small organs (within-branch). For the within-branch, we further refine the quality of pseudo labels by blending the representations learned from small cubes to incorporate local attributes. Our method is termed MagicNet, since it treats the CT volume as a magic cube and the $N^3$-cube partition-and-recovery process matches the way a magic cube is played. Extensive experiments on two public CT multi-organ datasets demonstrate the effectiveness of MagicNet, which noticeably outperforms state-of-the-art semi-supervised medical image segmentation approaches, with a +7% DSC improvement on the MACT dataset with 10% labeled images.
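The cube partition-and-recovery idea is concrete enough to sketch. Below is a minimal NumPy illustration, not the authors' implementation: it cuts two volumes into $N^3$ location-matched sub-cubes, swaps a random subset between a labeled and an unlabeled scan (the cross-branch mixing), and reassembles both. The mixing ratio and indexing scheme are assumptions made for illustration only.

```python
import numpy as np

def partition(volume, n):
    """Split a cubic volume (D, D, D) into n^3 equally sized sub-cubes."""
    d = volume.shape[0] // n
    return {(i, j, k): volume[i*d:(i+1)*d, j*d:(j+1)*d, k*d:(k+1)*d].copy()
            for i in range(n) for j in range(n) for k in range(n)}

def recover(cubes, n, d):
    """Reassemble n^3 sub-cubes of edge d back into a (n*d)^3 volume."""
    out = np.empty((n*d, n*d, n*d), dtype=next(iter(cubes.values())).dtype)
    for (i, j, k), c in cubes.items():
        out[i*d:(i+1)*d, j*d:(j+1)*d, k*d:(k+1)*d] = c
    return out

def cross_mix(labeled, unlabeled, n=3, swap_ratio=0.5, rng=np.random):
    """Swap a random subset of location-matched cubes between two volumes,
    so unlabeled content is exposed to organ semantics at the same relative
    locations as in the labeled scan (hypothetical mixing rule)."""
    d = labeled.shape[0] // n
    lab, unlab = partition(labeled, n), partition(unlabeled, n)
    keys = list(lab.keys())
    swap = rng.choice(len(keys), size=int(swap_ratio * len(keys)), replace=False)
    for s in swap:
        k = keys[s]
        lab[k], unlab[k] = unlab[k], lab[k]
    return recover(lab, n, d), recover(unlab, n, d)

mixed_l, mixed_u = cross_mix(np.random.rand(96, 96, 96),
                             np.random.rand(96, 96, 96), n=3)
```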
Contemporary methods have shown promising results on cardiac image segmentation, but only in a static learning setting, i.e., optimizing the network once for all, ignoring potential needs for model updating. In real-world scenarios, new data continues to be gathered from multiple institutions over time, and demand keeps growing for more satisfying performance. The desired model should incrementally learn from each incoming dataset and progressively update its functionality as time goes by. As the datasets sequentially delivered from multiple sites are normally heterogeneous with domain discrepancy, each updated model should not catastrophically forget previously learned domains while generalizing well to currently arrived domains or even unseen domains. In medical scenarios, this is particularly challenging, as accessing or storing past data is commonly not allowed due to data privacy. To this end, we propose a novel domain-incremental learning framework that first recovers past domain inputs and then regularly replays them during model optimization. In particular, we first present a style-oriented replay module to enable structure-realistic and memory-efficient reproduction of past data, and then incorporate the replayed past data to jointly optimize the model with the current data to alleviate catastrophic forgetting. During optimization, we additionally perform domain-sensitive feature whitening to suppress the model's dependency on features that are sensitive to domain changes (e.g., domain-distinctive style features), which assists domain-invariant feature exploration and gradually improves the generalization performance of the network. We have extensively evaluated our approach on the M&Ms Dataset in single-domain and compound-domain incremental learning settings, with improved performance over comparison approaches.
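The abstract does not detail the replay module, but one memory-efficient way to recall a past domain's appearance is to store its channel statistics and re-stylize current inputs with them (AdaIN-style). The sketch below shows that generic idea only; the paper's style-oriented replay module is likely richer, and all shapes and names here are assumptions.

```python
import torch

def channel_stats(x, eps=1e-5):
    """Per-sample channel mean/std of an (N, C, H, W) image or feature batch."""
    mu = x.mean(dim=(2, 3), keepdim=True)
    std = x.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mu, std

def replay_style(current, stored_mu, stored_std):
    """Re-stylize current-domain inputs with stored past-domain statistics
    (AdaIN-like re-normalization) -- a stand-in for structure-realistic replay."""
    mu, std = channel_stats(current)
    return (current - mu) / std * stored_std + stored_mu

# Storing only (mu, std) per past domain is far cheaper than storing raw images.
past_mu, past_std = channel_stats(torch.rand(4, 3, 128, 128))   # recorded earlier
replayed = replay_style(torch.rand(4, 3, 128, 128),
                        past_mu.mean(0, keepdim=True),
                        past_std.mean(0, keepdim=True))
```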
Co-occurring visual patterns make context aggregation an important paradigm for semantic segmentation. Existing studies focus on modeling the context within an image while ignoring the valuable semantics of the corresponding categories beyond the image. To this end, we propose a novel paradigm for softly mining contextual information beyond the image, named MCIBI++, to further boost pixel-level representations. Specifically, we first set up a dynamically updated memory module to store the dataset-level distribution information of the various categories, and then leverage this information to yield dataset-level category representations during the network's forward pass. After that, we generate a class probability distribution for each pixel representation and conduct dataset-level context aggregation with the class probability distribution as weights. Finally, the aggregated dataset-level context and the conventional image-level contextual information are used to augment the original pixel representations. Moreover, in the inference phase, we design a coarse-to-fine iterative inference strategy to further boost the segmentation results. MCIBI++ can be effortlessly incorporated into existing segmentation frameworks and brings consistent performance improvements. In addition, MCIBI++ can be extended to video semantic segmentation frameworks with considerable improvements over the baselines. Equipped with MCIBI++, we achieve state-of-the-art performance on seven challenging image or video semantic segmentation benchmarks.
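The aggregation step described above can be sketched compactly: a per-class memory holds dataset-level category representations, each pixel's class probabilities weight those entries, and the result is concatenated onto the pixel feature. The PyTorch sketch below assumes a simple moving-average memory update from labeled pixels; the paper's exact update rule may differ.

```python
import torch
import torch.nn.functional as F

num_classes, feat_dim = 19, 256
memory = torch.zeros(num_classes, feat_dim)           # dataset-level category memory

def aggregate_dataset_context(pixel_feats, class_logits, memory):
    """Weight the stored category representations by each pixel's class
    probabilities and concatenate the result onto the pixel features."""
    probs = F.softmax(class_logits, dim=-1)            # (N, K)
    context = probs @ memory                           # (N, C) dataset-level context
    return torch.cat([pixel_feats, context], dim=-1)   # (N, 2C) augmented representation

def update_memory(memory, pixel_feats, labels, momentum=0.999):
    """Moving-average update of the per-class memory from labeled pixels
    (one plausible update rule, assumed for illustration)."""
    for c in labels.unique():
        cls_mean = pixel_feats[labels == c].mean(dim=0)
        memory[c] = momentum * memory[c] + (1 - momentum) * cls_mean
    return memory

feats = torch.randn(1024, feat_dim)                    # flattened pixel features
logits = torch.randn(1024, num_classes)
labels = torch.randint(0, num_classes, (1024,))
memory = update_memory(memory, feats, labels)
augmented = aggregate_dataset_context(feats, logits, memory)
```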
Preoperative and non-invasive prediction of meningioma grade is important in clinical practice, as it directly influences clinical decision making. Moreover, brain invasion in meningioma (i.e., the presence of tumor tissue within adjacent brain tissue) is an independent criterion for meningioma grading and affects the treatment strategy. Although efforts have been reported to address these two tasks, most of them rely on hand-crafted features, and none attempt to exploit the two prediction tasks jointly. In this paper, we propose a novel task-aware contrastive learning algorithm to jointly predict meningioma grade and brain invasion from multi-modal MRI. Built upon a basic multi-task learning framework, our key idea is to adopt a contrastive learning strategy to disentangle image features into task-specific features and task-shared features, and to explicitly exploit their intrinsic connections to improve the feature representations for both prediction tasks. In this retrospective study, an MRI dataset of 800 patients with meningioma (containing 148 high-grade and 62 invasion cases) confirmed by pathological analysis was collected. Experimental results show that the proposed algorithm outperforms alternative multi-task learning methods, achieving AUCs of 0.8870 and 0.9787 for predicting meningioma grade and brain invasion, respectively. The code is available at https://github.com/isdling/predicttcl.
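To make the feature-disentanglement idea tangible, here is a toy surrogate objective, not the paper's loss: the task-shared embeddings of the same scan produced by the two task branches are pulled together, while the task-specific embeddings are pushed apart. All tensor shapes and the exact loss form are assumptions.

```python
import torch
import torch.nn.functional as F

def disentangle_loss(shared_a, shared_b, specific_a, specific_b):
    """Toy contrastive surrogate: task-shared embeddings from the grade and
    invasion branches should agree, while task-specific embeddings should
    remain dissimilar (illustrative only)."""
    shared_a, shared_b = F.normalize(shared_a, dim=-1), F.normalize(shared_b, dim=-1)
    specific_a, specific_b = F.normalize(specific_a, dim=-1), F.normalize(specific_b, dim=-1)
    pull = 1 - (shared_a * shared_b).sum(-1).mean()           # agreement of shared parts
    push = F.relu((specific_a * specific_b).sum(-1)).mean()   # separation of task parts
    return pull + push

# Each task head would emit a (shared, specific) embedding pair per scan.
loss = disentangle_loss(torch.randn(8, 128), torch.randn(8, 128),
                        torch.randn(8, 128), torch.randn(8, 128))
```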
Multi-modal MR imaging is routinely used in clinical practice to diagnose and investigate brain tumors, as it provides rich complementary information. Previous multi-modal MRI segmentation methods usually perform modality fusion by concatenating the multi-modal MRIs at an early or middle stage of the network, which can hardly explore the non-linear dependencies between modalities. In this work, we propose a novel Nested Modality-Aware Transformer (NestedFormer) to explicitly explore the intra-modality and inter-modality relationships of multi-modal MRIs for brain tumor segmentation. Built on a transformer-based multi-encoder and single-decoder structure, we perform nested multi-modal fusion on the high-level representations of the different modalities and apply modality-sensitive gating (MSG) at lower scales for more effective skip connections. Specifically, multi-modal fusion is conducted in our proposed Nested Modality-aware Feature Aggregation (NMaFA) module, which enhances long-range dependencies within individual modalities via a tri-orientated spatial-attention transformer, and further complements key contextual information across modalities via a cross-modality attention transformer. Extensive experiments on the BraTS2020 benchmark and a private meningioma segmentation (MeniSeg) dataset show that NestedFormer clearly outperforms the state of the art. The code is available at https://github.com/920232796/nestedformer.
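One plausible form of a modality-sensitive gate on skip connections is sketched below: each modality's low-scale feature map is re-weighted by a learned gate before fusion. This is an illustration of the general gating pattern, not a reproduction of the paper's MSG block, and the module/parameter names are assumptions.

```python
import torch
import torch.nn as nn

class ModalityGate(nn.Module):
    """Re-weight each modality's skip feature with a learned sigmoid gate
    before summing into a fused skip connection (simplified illustration)."""
    def __init__(self, channels, num_modalities):
        super().__init__()
        self.gates = nn.ModuleList(
            [nn.Sequential(nn.Conv3d(channels, channels, kernel_size=1), nn.Sigmoid())
             for _ in range(num_modalities)])

    def forward(self, modality_feats):
        gated = [gate(f) * f for gate, f in zip(self.gates, modality_feats)]
        return torch.stack(gated, dim=0).sum(dim=0)    # fused skip feature

gate = ModalityGate(channels=32, num_modalities=4)
fused = gate([torch.randn(1, 32, 16, 16, 16) for _ in range(4)])
```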
The feature pyramid network (FPN) is one of the key components of object detectors. However, a long-standing puzzle for researchers is that the detection performance on large-scale objects is usually suppressed after introducing an FPN. To this end, this paper first revisits the FPN within the detection framework and reveals the nature of the FPN's success from the perspective of optimization. We then point out that the degraded performance on large-scale objects is caused by improper back-propagation paths that arise after integrating the FPN: each level of the backbone network is only able to look at objects within a certain scale range. Based on these analyses, two feasible strategies are proposed to enable every backbone level to look at all objects in FPN-based detection frameworks. Specifically, one is to introduce auxiliary objective functions so that each backbone level directly receives back-propagation signals from objects of various scales during training. The other is to construct the feature pyramid in a more reasonable way to avoid the irrational back-propagation paths. Extensive experiments on the COCO benchmark validate the soundness of our analysis and the effectiveness of our methods. Without bells and whistles, we demonstrate that our method achieves solid improvements (more than 2%) on various detection frameworks: one-stage, two-stage, anchor-based, anchor-free, and transformer-based detectors.
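The first strategy (auxiliary objectives per backbone level) can be sketched as a set of lightweight training-only heads attached directly to the backbone stages, so that every stage receives gradients from objects of all scales. The head design, channel counts, and loss targets below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AuxiliaryHeads(nn.Module):
    """Lightweight per-stage predictors used only during training; each head
    would feed an auxiliary loss against all-scale targets so every backbone
    level receives back-propagation signals directly (illustrative sketch)."""
    def __init__(self, stage_channels, num_classes):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Conv2d(c, num_classes, kernel_size=1) for c in stage_channels])

    def forward(self, stage_feats):
        return [head(f) for head, f in zip(self.heads, stage_feats)]

aux = AuxiliaryHeads(stage_channels=[256, 512, 1024, 2048], num_classes=80)
feats = [torch.randn(1, c, s, s) for c, s in zip([256, 512, 1024, 2048],
                                                 [64, 32, 16, 8])]
aux_logits = aux(feats)   # auxiliary losses computed from these; heads dropped at inference
```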
Modern deep neural networks struggle to transfer knowledge and generalize across domains when deployed in real-world applications. Domain generalization (DG) has been introduced to learn a universal representation from multiple source domains and improve the network's generalization ability on unseen domains. However, previous DG methods focus only on data-level consistency schemes, without considering the synergistic regularization among different consistency schemes. In this paper, we present a novel Hierarchical Consistency framework for Domain Generalization (HCDG) by synergistically integrating extrinsic consistency and intrinsic consistency. In particular, for extrinsic consistency, we leverage the knowledge across multiple source domains to enforce data-level consistency. To better enhance this consistency, we design a novel amplitude Gaussian-mixing strategy for Fourier-based data augmentation, called DomainUp. For intrinsic consistency, we enforce task-level consistency for the same instance under a dual-task scenario. We evaluate the proposed HCDG framework on two medical image segmentation tasks, i.e., optic cup/disc segmentation on fundus images and prostate MRI segmentation. Extensive experimental results demonstrate the effectiveness and versatility of our HCDG framework.
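The Fourier-based augmentation underlying DomainUp can be illustrated with plain amplitude mixing: interpolate the amplitude spectra of two images while keeping the phase of the original, so content is preserved but appearance shifts toward the other domain. The sketch below shows only that generic mechanism; DomainUp's Gaussian-mixing coefficient schedule is not reproduced, and the uniform sampling of the mixing weight is an assumption.

```python
import numpy as np

def fourier_amplitude_mix(img_a, img_b, lam=None, rng=np.random):
    """Mix img_a's Fourier amplitude with img_b's while keeping img_a's phase
    (generic amplitude-mixing augmentation; illustrative only)."""
    if lam is None:
        lam = rng.uniform(0.0, 1.0)
    fa = np.fft.fft2(img_a, axes=(0, 1))
    fb = np.fft.fft2(img_b, axes=(0, 1))
    amp = (1 - lam) * np.abs(fa) + lam * np.abs(fb)    # interpolated amplitude
    mixed = amp * np.exp(1j * np.angle(fa))            # original phase preserved
    return np.real(np.fft.ifft2(mixed, axes=(0, 1)))

augmented = fourier_amplitude_mix(np.random.rand(256, 256),
                                  np.random.rand(256, 256))
```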
Batch normalization (BN) uniformly shifts and scales the activations based on the statistics of a batch of images. However, the intensity distribution of the background pixels often dominates the BN statistics, because the background accounts for a large proportion of the entire image. This paper focuses on enhancing BN with the intensity distribution of the foreground pixels, which is critical for image segmentation. We propose a new normalization strategy, called categorical normalization (CateNorm), to normalize the activations according to categorical statistics. The categorical statistics are obtained by dynamically modulating the specific regions of an image that belong to the foreground. CateNorm demonstrates precise and robust segmentation results on five public datasets obtained from different domains, covering complex and variable data distributions. This is attributed to CateNorm's ability to capture domain-invariant information from multiple domains (institutions) of medical data. Code is available at https://github.com/lambert-x/catenorm.
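A simplified reading of the idea is to normalize foreground and background activations with separate statistics derived from a (soft or hard) foreground mask, rather than one batch-wide estimate. The sketch below is that simplified form, not the paper's module, and the mask source and weighting are assumptions.

```python
import torch

def categorical_normalize(x, fg_mask, eps=1e-5):
    """Normalize foreground and background activations with separate,
    mask-weighted statistics (simplified illustration of the CateNorm idea).
    x: (N, C, H, W); fg_mask: (N, 1, H, W) with values in [0, 1]."""
    out = torch.zeros_like(x)
    for mask in (fg_mask, 1 - fg_mask):
        w = mask / (mask.sum(dim=(0, 2, 3), keepdim=True) + eps)   # normalized weights
        mu = (x * w).sum(dim=(0, 2, 3), keepdim=True)              # category-wise mean
        var = ((x - mu) ** 2 * w).sum(dim=(0, 2, 3), keepdim=True) # category-wise variance
        out = out + mask * (x - mu) / (var + eps).sqrt()
    return out

normalized = categorical_normalize(torch.randn(2, 16, 64, 64),
                                   torch.rand(2, 1, 64, 64))
```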
Training deep convolutional neural networks usually requires a large amount of labeled data. However, it is expensive and time-consuming to annotate data for medical image segmentation tasks. In this paper, we present a novel uncertainty-aware semi-supervised framework for left atrium segmentation from 3D MR images. Our framework can effectively leverage the unlabeled data by encouraging consistent predictions of the same input under different perturbations. Concretely, the framework consists of a student model and a teacher model, and the student model learns from the teacher model by minimizing a segmentation loss and a consistency loss with respect to the targets of the teacher model. We design a novel uncertainty-aware scheme that enables the student model to gradually learn from meaningful and reliable targets by exploiting the uncertainty information. Experiments show that our method achieves high performance gains by incorporating the unlabeled data. Our method outperforms the state-of-the-art semi-supervised methods, demonstrating the potential of our framework for challenging semi-supervised segmentation problems.
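A common way to realize such an uncertainty-aware scheme is to estimate teacher uncertainty with Monte Carlo dropout and keep the consistency loss only on voxels judged reliable. The sketch below follows that pattern; the number of MC passes, the entropy threshold, and the toy teacher network are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def uncertainty_masked_consistency(student_logits, teacher_model, image,
                                   n_mc=8, threshold=0.5):
    """Estimate teacher uncertainty via Monte Carlo dropout and apply the
    voxel-wise consistency loss only where the predictive entropy is low
    (illustrative hyper-parameters)."""
    teacher_model.train()                               # keep dropout active
    with torch.no_grad():
        probs = torch.stack([F.softmax(teacher_model(image), dim=1)
                             for _ in range(n_mc)]).mean(dim=0)
    entropy = -(probs * torch.log(probs + 1e-6)).sum(dim=1, keepdim=True)
    mask = (entropy < threshold).float()                # reliable voxels only
    mse = (F.softmax(student_logits, dim=1) - probs) ** 2
    return (mask * mse).sum() / (mask.sum() * mse.shape[1] + 1e-6)

# Toy teacher with dropout, standing in for the actual segmentation network.
teacher = torch.nn.Sequential(torch.nn.Conv3d(1, 2, 3, padding=1),
                              torch.nn.Dropout3d(0.5))
loss = uncertainty_masked_consistency(torch.randn(1, 2, 16, 16, 16), teacher,
                                      torch.randn(1, 1, 16, 16, 16))
```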
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
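As a rough intuition for image-level photometric alignment, one can shift and scale each channel of a target-domain image so its statistics match a source-domain reference. The NumPy sketch below shows only that simple stand-in; the paper's global photometric alignment module is presumably learned and richer, and the reference statistics used here are assumptions.

```python
import numpy as np

def photometric_align(image, ref_mean, ref_std, eps=1e-6):
    """Match a target-domain image's per-channel mean/std to source-domain
    reference statistics (simplified stand-in for photometric alignment)."""
    mean = image.mean(axis=(0, 1), keepdims=True)
    std = image.std(axis=(0, 1), keepdims=True) + eps
    return (image - mean) / std * ref_std + ref_mean

source = np.random.rand(512, 1024, 3)                  # source-domain reference image
aligned = photometric_align(np.random.rand(512, 1024, 3),
                            source.mean(axis=(0, 1), keepdims=True),
                            source.std(axis=(0, 1), keepdims=True))
```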